
    Depicting shape, materials and lighting: observation, formulation and implementation of artistic principles

    The appearance of a scene results from complex interactions between the geometry, materials and lights that compose that scene. While Computer Graphics algorithms are now capable of simulating these interactions, this capability comes at the cost of tedious 3D modeling of a virtual scene, which only well-trained artists can do. In contrast, photographs allow the instantaneous capture of a scene, but shape, materials and lighting are difficult to manipulate directly in the image. Drawings can also suggest real or imaginary scenes with a few lines, but creating convincing illustrations requires significant artistic skills. The goal of my research is to facilitate the creation and manipulation of shape, materials and lighting in drawings and photographs, for laymen and professional artists alike. This document first presents my work on computer-assisted drawing, where I proposed algorithms to automate the depiction of materials in line drawings as well as to estimate a 3D model from design sketches. I also worked on user interfaces to assist beginners in learning traditional drawing techniques. Through the development of these projects I have formalized a general methodology to observe how artists work, deduce artistic principles from these observations, and implement these principles as algorithms. In the second part of this document I present my work on relighting multiple photographs of a scene, for which we first need to estimate the materials and lighting that compose that scene. The main novelty of our approach is to combine image analysis and lighting simulation in order to reason about the scene despite the lack of an accurate 3D model.

    Line rendering of 3D meshes for data-driven sketch-based modeling

    Deep learning recently achieved impressive successes on various computer vision tasks for which large amounts of training data are available, such as image classification. These successes have motivated the use of computer graphics to generate synthetic data for tasks where real data is difficult to collect. We present SynDraw, a non-photorealistic rendering system designed to ease the generation of synthetic drawings to train data-driven sketch-based modeling systems. SynDraw processes triangular meshes and extracts various types of synthetic lines, including occluding contours, suggestive contours, creases, and demarcating curves. SynDraw exports these lines as vector graphics to allow subsequent stylization. Finally, SynDraw can also export attributes of the lines, such as their 3D coordinates and their types, which can serve as ground truth for depth prediction or line labeling tasks. We provide both a command-line interface for batch processing and an interactive viewer to explore and save line extraction parameters. We will release SynDraw as an open source library to support research in non-photorealistic rendering and sketch-based modeling.
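
    As an illustration of the kind of lines SynDraw extracts, the Python sketch below implements the textbook occluding-contour test on a triangle mesh: an edge lies on the occluding contour when one of its two adjacent faces is front-facing and the other back-facing with respect to the camera. This is a minimal sketch of the general technique, not SynDraw's code; the function name and array layout are assumptions.

        import numpy as np

        def occluding_contour_edges(vertices, faces, eye):
            """Edges where one adjacent face is front-facing and the other
            back-facing, i.e. the classic occluding-contour test (perspective).

            vertices: (V, 3) floats; faces: (F, 3) vertex indices; eye: (3,)."""
            v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
            normals = np.cross(v1 - v0, v2 - v0)          # per-face normals
            centroids = (v0 + v1 + v2) / 3.0
            # A face is front-facing if its normal points toward the eye.
            front = np.einsum('ij,ij->i', normals, eye - centroids) > 0.0

            # Map each undirected edge to the faces that share it.
            edge_faces = {}
            for f, (a, b, c) in enumerate(faces):
                for e in ((a, b), (b, c), (c, a)):
                    edge_faces.setdefault(tuple(sorted(e)), []).append(f)

            # Keep manifold edges whose two faces disagree on facing.
            return [e for e, fs in edge_faces.items()
                    if len(fs) == 2 and front[fs[0]] != front[fs[1]]]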

    A data-driven approach to recover the viewpoint of a design sketch

    Designing objects requires frequent transitions from a 2D representation, the sketch, to a 3D one. Because 3D modeling is time-consuming, it is performed only during the late phases of the design process. Our long-term goal is to allow designers to automatically generate 3D models from their sketches. In this paper, we address the preliminary step of recovering the viewpoint under which the object is drawn. We adopt a data-driven approach where we build correspondences between the sketch and 3D objects of the same class from a database. In particular, we relate the curvature lines and contours of the 3D objects to similar lines commonly drawn by designers. The 3D objects from the database are then used to vote for the best viewpoint. Our results on design sketches suggest that using both contours and curvature lines gives higher precision than using either one. In particular, curvature information improves viewpoint retrieval when details of the objects differ from the sketch.
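
    The voting step can be sketched in a few lines of Python: render the contours and curvature lines of each database object from a set of candidate viewpoints, score each rendering against the sketch (here with a one-directional Chamfer distance on binary line images), and let each object vote for its best-scoring viewpoint. All names and the image representation are assumptions for illustration, not the paper's implementation.

        import numpy as np
        from scipy.ndimage import distance_transform_edt

        def chamfer_distance(sketch_lines, rendered_lines):
            """One-directional Chamfer distance between two same-size binary
            line images: mean distance from each rendered line pixel to the
            nearest sketch line pixel."""
            dist_to_sketch = distance_transform_edt(~sketch_lines)
            return dist_to_sketch[rendered_lines].mean()

        def vote_for_viewpoint(sketch_lines, database_renderings):
            """database_renderings: dict mapping viewpoint id -> list of
            binary line images (one per database object) rendered from that
            viewpoint. Each object votes for its best-matching viewpoint."""
            votes = {vp: 0 for vp in database_renderings}
            n_objects = len(next(iter(database_renderings.values())))
            for obj in range(n_objects):
                best_vp = min(database_renderings,
                              key=lambda vp: chamfer_distance(
                                  sketch_lines, database_renderings[vp][obj]))
                votes[best_vp] += 1
            return max(votes, key=votes.get)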

    User-assisted intrinsic images

    For many computational photography applications, the lighting and materials in the scene are critical pieces of information. We seek to obtain intrinsic images, which decompose a photo into the product of an illumination component that represents lighting effects and a reflectance component that is the color of the observed material. This is an under-constrained problem, and automatic methods are challenged by complex natural images. We describe a new approach that enables users to guide an optimization with simple indications such as regions of constant reflectance or illumination. Based on a simple assumption on local reflectance distributions, we derive a new propagation energy that enables a closed-form solution using linear least-squares. We achieve fast performance by introducing a novel downsampling that preserves local color distributions. We demonstrate intrinsic image decomposition on a variety of images and show applications.
    Funding: National Science Foundation (U.S.) (NSF CAREER award 0447561); Institut national de recherche en informatique et en automatique (France) (Associate Research Team “Flexible Rendering”); Microsoft Research (New Faculty Fellowship); Alfred P. Sloan Foundation (Research Fellowship); Quanta Computer, Inc. (MIT-Quanta T-Party).
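
    The decomposition can be written as I = R × S, or log I = log R + log S in the log domain, where user strokes become linear constraints: inside a constant-reflectance region, differences of log-shading must equal differences of log-intensity, and inside a constant-illumination region, log-shading differences must vanish. The Python sketch below solves such a system with sparse least-squares; it is a minimal 1D illustration of this class of formulation, with assumed names, not the paper's propagation energy.

        import numpy as np
        from scipy.sparse import coo_matrix
        from scipy.sparse.linalg import lsqr

        def decompose(log_I, same_refl_pairs, same_illum_pairs, smooth=0.1):
            """Solve for per-pixel log-shading s on a flattened image, then
            log-reflectance is r = log_I - s.  r_p = r_q implies
            s_p - s_q = log_I_p - log_I_q; constant illumination implies
            s_p - s_q = 0.  A weak smoothness term regularizes s."""
            n = log_I.size
            rows, cols, vals, rhs = [], [], [], []
            def add_pair(p, q, target, w):
                k = len(rhs)
                rows += [k, k]; cols += [p, q]; vals += [w, -w]
                rhs.append(w * target)
            for p, q in same_refl_pairs:
                add_pair(p, q, log_I[p] - log_I[q], 1.0)
            for p, q in same_illum_pairs:
                add_pair(p, q, 0.0, 1.0)
            for p in range(n - 1):       # 1D smoothness for brevity; a real
                add_pair(p, p + 1, 0.0, smooth)  # implementation uses 2D neighbors
            A = coo_matrix((vals, (rows, cols)), shape=(len(rhs), n)).tocsr()
            s = lsqr(A, np.asarray(rhs))[0]
            return log_I - s, s          # (log-reflectance, log-shading)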

    Multi-Pose Interactive Linkage Design

    We introduce an interactive tool for novice users to design mechanical objects made of 2.5D linkages. Users simply draw the shape of the object and a few key poses of its multiple moving parts. Our approach automatically generates a one-degree-of-freedom linkage that connects the fixed and moving parts, such that the moving parts traverse all input poses in order without any collision with the fixed and other moving parts. In addition, our approach avoids common linkage defects and favors compact linkages and smooth motion trajectories. Finally, our system automatically generates the 3D geometry of the object and its links, allowing the rapid creation of a physical mockup of the designed object.
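
    While the paper's solver is far more involved, the basic object it reasons about, a one-degree-of-freedom linkage, can be illustrated with the classic four-bar mechanism: as a single crank angle varies, every joint position follows by intersecting circles. A hypothetical Python sketch, with all names and conventions assumed, and degenerate configurations simply clamped for brevity:

        import numpy as np

        def circle_intersection(c0, r0, c1, r1, upper=True):
            """One of the two intersections of circles (c0, r0) and (c1, r1);
            assumes a valid, non-degenerate configuration (d > 0)."""
            d = np.linalg.norm(c1 - c0)
            a = (r0**2 - r1**2 + d**2) / (2 * d)   # distance from c0 along axis
            h = np.sqrt(max(r0**2 - a**2, 0.0))    # perpendicular offset, clamped
            axis = (c1 - c0) / d
            perp = np.array([-axis[1], axis[0]])
            return c0 + a * axis + (h if upper else -h) * perp

        def four_bar_trace(ground_a, ground_b, crank, coupler, rocker, n=360):
            """Trace the coupler joint of a one-DoF four-bar linkage as the
            crank anchored at ground_a rotates through a full turn."""
            points = []
            for theta in np.linspace(0, 2 * np.pi, n, endpoint=False):
                tip = ground_a + crank * np.array([np.cos(theta), np.sin(theta)])
                # The coupler joint lies on both the coupler circle around the
                # crank tip and the rocker circle around ground_b.
                points.append(circle_intersection(tip, coupler, ground_b, rocker))
            return np.array(points)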

    Flexible SVBRDF Capture with a Multi-Image Deep Network

    Empowered by deep learning, recent methods for material capture can estimate a spatially-varying reflectance from a single photograph. Such lightweight capture is in stark contrast with the tens or hundreds of pictures required by traditional optimization-based approaches. However, a single image is often simply not enough to observe the rich appearance of real-world materials. We present a deep-learning method capable of estimating material appearance from a variable number of uncalibrated and unordered pictures captured with a handheld camera and flash. Thanks to an order-independent fusing layer, this architecture extracts the most useful information from each picture, while benefiting from strong priors learned from data. The method can handle both view and light direction variation without calibration. We show how our method improves its prediction with the number of input pictures, and reaches high-quality reconstructions with as few as 1 to 10 images, a sweet spot between existing single-image and complex multi-image approaches.
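
    The key architectural idea, an order-independent fusing layer, can be sketched as a pooling operation over the image axis: per-image features are pooled with a max, so the fused features do not depend on how many pictures were taken or in which order. The PyTorch toy below is an illustration of that idea under assumed layer sizes, not the paper's network, which is much deeper.

        import torch
        import torch.nn as nn

        class OrderIndependentFusion(nn.Module):
            """Encode each input image, fuse with max pooling over the image
            axis, and decode the fused features into output maps."""
            def __init__(self, channels=64):
                super().__init__()
                self.encode = nn.Conv2d(3, channels, kernel_size=3, padding=1)
                # 12 output channels stand in for stacked SVBRDF maps.
                self.decode = nn.Conv2d(channels, 12, kernel_size=3, padding=1)

            def forward(self, images):          # images: (N, 3, H, W), any N
                feats = torch.relu(self.encode(images))        # (N, C, H, W)
                fused = feats.max(dim=0, keepdim=True).values  # pool over N
                return self.decode(fused)                      # (1, 12, H, W)

    Because max pooling commutes with permutations of its inputs, the fused output is identical for any ordering or number of input images.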

    Video Motion Stylization by 2D Rigidification

    This paper introduces a video stylization method that increases the apparent rigidity of motion. Existing stylization methods often retain the 3D motion of the original video, making the result look like a 3D scene covered in paint rather than a 2D painting of a scene. In contrast, traditional hand-drawn animations often exhibit simplified in-plane motion, such as in the case of cutout animations where the animator moves pieces of paper from frame to frame. Inspired by this technique, we propose to modify a video such that its content undergoes 2D rigid transforms. To achieve this goal, our approach applies motion segmentation and optimization to best approximate the input optical flow with piecewise-rigid transforms, and re-renders the video such that its content follows the simplified motion. The output of our method is a new video and its optical flow, which can be fed to any existing video stylization algorithm.
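
    The per-segment fitting step can be illustrated with the standard Procrustes/Kabsch solution: given pixel positions in one motion segment and their destinations under the optical flow, the best least-squares 2D rotation and translation follow from an SVD of the 2x2 covariance. A minimal Python sketch with assumed names, not the paper's optimization:

        import numpy as np

        def fit_rigid_transform(src, dst):
            """Least-squares 2D rigid transform (rotation R, translation t)
            mapping points src (M, 2) onto dst (M, 2), via Procrustes/Kabsch."""
            mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
            H = (src - mu_s).T @ (dst - mu_d)       # 2x2 cross-covariance
            U, _, Vt = np.linalg.svd(H)
            d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
            R = Vt.T @ np.diag([1.0, d]) @ U.T
            t = mu_d - R @ mu_s
            return R, t

        # Usage with optical flow: for pixels p in one segment, dst = p + flow(p);
        # the residual ||R p + t - dst|| measures how rigidly the segment moves.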

    Self-zooming solid textures for temporally coherent real-time stylization

    Stylization methods, whose goal is to depict a dynamic 3D scene with 2D marks such as pigments or brush strokes, generally face the problem of temporal coherence. In this article, we present a real-time, temporally coherent stylization method based on textures: self-zooming textures. A self-zooming texture is a texture mapped onto 3D objects and enriched with a new infinite-zoom mechanism. This mechanism maintains a nearly constant screen-space size for the texture elements. During stylization, it reinforces the 2D appearance of the style marks while remaining faithful to the 3D motion of the depicted objects. We illustrate this property with a variety of styles such as watercolor and binary-mark rendering. Although our infinite-zoom technique can be used for both 2D and 3D textures, this article focuses on the 3D case (which we call self-zooming solid textures), which avoids defining a parameterization of the 3D surfaces. By integrating our method into a rendering engine, we validate the relevance of this trade-off between quality and speed.
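
    The infinite-zoom mechanism can be sketched as blending two octaves of a tileable texture whose scales bracket the current zoom factor, so texture elements keep a near-constant on-screen size. The Python sketch below shows a simple two-octave version of this idea; actual implementations typically blend octaves in a shader, and texture_sample is a hypothetical sampling function.

        import numpy as np

        def infinite_zoom_sample(texture_sample, uv, zoom):
            """Blend two texture octaves whose scales bracket `zoom`.
            texture_sample(uv) samples a tileable texture (coordinates wrap);
            zoom > 0 is the current camera zoom factor."""
            level = np.log2(zoom)                    # continuous octave index
            lo = np.floor(level)
            alpha = level - lo                       # blend weight in [0, 1)
            sample_lo = texture_sample(uv * 2.0 ** lo)
            sample_hi = texture_sample(uv * 2.0 ** (lo + 1))
            return (1.0 - alpha) * sample_lo + alpha * sample_hi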

    WrapIt: Computer-Assisted Crafting of Wire Wrapped Jewelry

    Wire wrapping is a traditional form of handmade jewelry that involves bending metal wire to create intricate shapes. The technique appeals to novices and casual crafters because of its low cost, accessibility and unique aesthetic. We present a computational design tool that addresses the two main challenges of creating 2D wire-wrapped jewelry: decomposing an input drawing into a set of wires, and bending the wires to give them shape. Our main contribution is an automatic wire decomposition algorithm that segments a drawing into a small number of wires based on aesthetic and fabrication principles. We formulate the task as a constrained graph labeling problem and present a stochastic optimization approach that produces good results for a variety of inputs. Given a decomposition, our system generates a 3D-printed custom support structure, or jig, that helps users bend the wire into the appropriate shape. We validated our wire decomposition algorithm against existing wire-wrapped designs, and used our end-to-end system to create new jewelry from clipart drawings. We also evaluated our approach with novice users, who were able to create various pieces of jewelry in less than half an hour.
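
    The stochastic optimization over wire decompositions can be illustrated with a generic simulated-annealing loop over labelings, where cost would encode the aesthetic and fabrication penalties and proposal mutates a labeling, for example by moving one segment to another wire. This is a plain-vanilla annealing sketch under assumed interfaces, not the paper's algorithm.

        import math, random

        def anneal_labeling(segments, cost, proposal, steps=20000, t0=1.0, t1=1e-3):
            """Minimize cost(labeling) over segment -> wire-id assignments.
            `proposal` takes a copy of the labeling and returns a mutated one."""
            labeling = {s: 0 for s in segments}      # start with a single wire
            best, best_cost = dict(labeling), cost(labeling)
            for i in range(steps):
                t = t0 * (t1 / t0) ** (i / steps)    # geometric cooling schedule
                candidate = proposal(dict(labeling))
                delta = cost(candidate) - cost(labeling)
                # Accept improvements always, worsenings with probability e^(-d/t).
                if delta < 0 or random.random() < math.exp(-delta / t):
                    labeling = candidate
                    if cost(labeling) < best_cost:
                        best, best_cost = dict(labeling), cost(labeling)
            return best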